linear velocity


Unscented Particle Filter for Visual-inertial Navigation using IMU and Landmark Measurements

Ghanizadegan, Khashayar, Hashim, Hashim A.

arXiv.org Artificial Intelligence

This paper introduces a geometric Quaternion-based Unscented Particle Filter for Visual-Inertial Navigation (QUPF-VIN) designed for a vehicle operating with six degrees of freedom (6 DoF). The proposed QUPF-VIN technique is quaternion-based, capturing the inherently nonlinear nature of the true navigation kinematics. The filter fuses data from a low-cost inertial measurement unit (IMU) with landmark observations obtained via a vision sensor. The QUPF-VIN is implemented in discrete form to ensure seamless integration with onboard inertial sensing systems. Designed for robustness in GPS-denied environments, the proposed method has been validated through experiments on a real-world dataset involving an unmanned aerial vehicle (UAV) equipped with a 6-axis IMU and a stereo camera, operating with 6 DoF. The numerical results demonstrate that the QUPF-VIN tracks the ground truth data with high accuracy. Additionally, a comparative analysis with a standard Kalman filter-based navigation technique further highlights the enhanced performance of the QUPF-VIN.
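The discrete quaternion kinematics that such a filter propagates between measurement updates can be sketched as follows. This is a generic illustration, not the authors' implementation: it assumes a scalar-first Hamilton convention and zero-order-hold integration of the gyro reading over one IMU step.

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product, scalar-first convention [w, x, y, z]
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def propagate(q, omega, dt):
    # exact integration of dq/dt = 0.5 q ⊗ [0, omega] for constant omega over dt
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
    q_next = quat_mul(q, dq)
    return q_next / np.linalg.norm(q_next)  # renormalize to stay on the unit sphere
```

In a particle-filter setting, each particle's quaternion would be propagated this way with noise injected into `omega`.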


Robust Visual Servoing under Human Supervision for Assembly Tasks

Fernandez-Ayala, Victor Nan, Silva, Jorge, Guo, Meng, Dimarogonas, Dimos V.

arXiv.org Artificial Intelligence

We propose a framework enabling mobile manipulators to reliably complete pick-and-place tasks for assembling structures from construction blocks. The picking uses an eye-in-hand visual servoing controller for object tracking with Control Barrier Functions (CBFs) to ensure fiducial markers in the blocks remain visible. An additional robot with an eye-to-hand setup ensures precise placement, critical for structural stability. We integrate human-in-the-loop capabilities for flexibility and fault correction and analyze robustness to camera pose errors, proposing adapted barrier functions to handle them. Lastly, experiments validate the framework on 6-DoF mobile arms.
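The visibility constraint described above can be illustrated with a field-of-view barrier function. This is a minimal sketch, not the paper's controller: it assumes the camera's optical axis is the body +z axis and models the constraint as a cone of a given half-angle, with h > 0 while the marker stays inside the cone.

```python
import numpy as np

def visibility_barrier(p_marker_cam, half_fov_rad):
    # CBF candidate h(x): positive while the marker direction lies
    # inside the camera's field-of-view cone about the +z optical axis
    d = p_marker_cam / np.linalg.norm(p_marker_cam)
    return d[2] - np.cos(half_fov_rad)
```

A CBF-based controller would then filter the nominal visual-servoing command so that dh/dt + alpha*h >= 0 along the closed-loop trajectory, keeping h from crossing zero.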


Quaternion-based Unscented Kalman Filter for 6-DoF Vision-based Inertial Navigation in GPS-denied Regions

Ghanizadegan, Khashayar, Hashim, Hashim A.

arXiv.org Artificial Intelligence

This paper investigates the orientation, position, and linear velocity estimation problem of a rigid body moving in three-dimensional (3D) space with six degrees of freedom (6 DoF). The highly nonlinear navigation kinematics are formulated to ensure global representation of the navigation problem. A computationally efficient Quaternion-based Navigation Unscented Kalman Filter (QNUKF) is proposed on $\mathbb{S}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{3}$, imitating the true nonlinear navigation kinematics and utilizing onboard Visual-Inertial Navigation (VIN) units to achieve successful GPS-denied navigation. The proposed QNUKF is designed in discrete form to operate based on the data fusion of photographs garnered by a vision unit (stereo or monocular camera) and information collected by a low-cost inertial measurement unit (IMU). The photographs are processed to extract feature points in 3D space, while the 6-axis IMU supplies angular velocity and accelerometer measurements expressed with respect to the body frame. Robustness and effectiveness of the proposed QNUKF have been confirmed through experiments on a real-world dataset collected by a drone navigating in 3D and consisting of stereo images and 6-axis IMU measurements. The proposed approach is also validated against standard state-of-the-art filtering techniques. IEEE Keywords: Localization, Navigation, Unmanned Aerial Vehicle, Sensor-fusion, Inertial Measurement Unit, Vision Unit.
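At the core of any unscented filter is the sigma-point construction. The sketch below shows the standard Euclidean unscented transform for intuition only; the QNUKF above works on $\mathbb{S}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{3}$ and therefore handles the quaternion part differently. The scaling parameter `kappa` is a common tuning choice, not a value taken from the paper.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    # 2n+1 deterministic samples whose weighted mean/covariance
    # exactly reproduce (mean, cov)
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)  # columns are scaled "square-root" directions
    pts = [mean] \
        + [mean + S[:, i] for i in range(n)] \
        + [mean - S[:, i] for i in range(n)]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2 * (n + kappa))
    weights = np.array([w0] + [wi] * (2 * n))
    return np.array(pts), weights
```

Each sigma point is pushed through the nonlinear kinematics, and the predicted mean and covariance are recovered as weighted sums, avoiding the Jacobians an EKF would need.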


FilMBot: A High-Speed Soft Parallel Robotic Micromanipulator

Yu, Jiangkun, Bettahar, Houari, Kandemir, Hakan, Zhou, Quan

arXiv.org Artificial Intelligence

Soft robotic manipulators are generally slow despite their great adaptability, resilience, and compliance. This limitation also extends to current soft robotic micromanipulators. Here, we introduce FilMBot, a 3-DOF film-based, electromagnetically actuated, soft kinematic robotic micromanipulator achieving speeds up to 2117 $\deg$/s and 2456 $\deg$/s in $\alpha$ and $\beta$ angular motions, with corresponding linear velocities of 1.61 m/s and 1.92 m/s using a 4-cm needle end-effector, and 1.57 m/s along the Z axis. The robot can reach ~1.50 m/s in path-following tasks, operates at frequencies up to 30 Hz, and remains functional up to 50 Hz. It demonstrates high precision (~6.3 $\mu$m, or ~0.05% of its workspace) in small path-following tasks. The novel combination of the low-stiffness soft kinematic film structure and strong electromagnetic actuation in FilMBot opens new avenues for soft robotics. Furthermore, its simple construction and inexpensive, readily accessible components could broaden the application of micromanipulators beyond current academic and professional users.


Robust High-Speed State Estimation for Off-road Navigation using Radar Velocity Factors

Nissov, Morten, Edlund, Jeffrey A., Spieler, Patrick, Padgett, Curtis, Alexis, Kostas, Khattak, Shehryar

arXiv.org Artificial Intelligence

Enabling robot autonomy in complex environments for mission-critical applications requires robust state estimation, particularly under conditions where the exteroceptive sensors that navigation depends on can be degraded by environmental challenges, leading to mission failure. It is precisely in such challenges that the potential of FMCW radar sensors is highlighted: a complementary exteroceptive sensing modality with direct velocity-measuring capability. In this work, we integrate radial speed measurements from an FMCW radar sensor, via a radial speed factor, to provide linear velocity updates to a sliding-window state estimator that fuses LiDAR pose and IMU measurements. We demonstrate that this augmentation increases the robustness of the state estimator to challenging environmental conditions and to the negative effects they pose to vulnerable exteroceptive modalities. The proposed method is extensively evaluated in robotic field experiments using an autonomous, full-scale, off-road vehicle operating at high speeds (~12 m/s) in complex desert environments. Furthermore, the robustness of the approach is demonstrated for both simulated and real-world degradation of LiDAR odometry performance, along with comparisons against state-of-the-art radar-inertial odometry methods on public datasets.
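A radial speed factor penalizes the mismatch between the radar-measured speed along a beam and the projection of the estimated velocity onto that beam direction. The residual below is a simplified sketch under assumed conventions (world-frame velocity, body-frame unit beam direction `d_b`, rotation `R_wb` from body to world); the paper's factor also accounts for sensor extrinsics and noise models.

```python
import numpy as np

def radial_speed_residual(v_world, R_wb, d_b, v_r_meas):
    # predicted radial speed: estimated velocity rotated into the body frame,
    # projected onto the radar beam's unit direction
    v_body = R_wb.T @ v_world
    return float(d_b @ v_body) - v_r_meas
```

In a factor-graph estimator, this scalar residual (one per radar return) is minimized jointly with the LiDAR pose and IMU preintegration factors over the sliding window.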


Fuzzy Logic Control for Indoor Navigation of Mobile Robots

Kumar, Akshay, Sahasrabudhe, Ashwin, Nirgude, Sanjuksha

arXiv.org Artificial Intelligence

Autonomous mobile robots have many applications in indoor unstructured environments, where optimal movement of the robot is needed. The robot therefore must navigate in unknown and dynamic environments. This paper presents an implementation of a fuzzy logic controller for navigation of a mobile robot in an unknown, dynamically cluttered environment. A fuzzy logic controller is used here as it is capable of making inferences even under uncertainty. It aids rule generation and the decision-making process in order to reach the goal position under various situations. Sensor readings from the robot and the desired direction of motion are the inputs to the fuzzy logic controller, and the accelerations of the respective wheels are its outputs. Hence, the mobile robot avoids obstacles and reaches the goal position. Keywords: Fuzzy Logic Controller, Membership Functions, Takagi-Sugeno-Kang FIS, Centroid Defuzzification
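The Takagi-Sugeno-Kang inference named in the keywords can be sketched with two illustrative rules. The membership shapes, rule set, and output values below are invented for illustration and are not the paper's rule base.

```python
def tri(x, a, b, c):
    # triangular membership function: 0 outside [a, c], peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance):
    # zero-order Sugeno controller with two hypothetical rules:
    #   IF distance is NEAR THEN speed = 0.1
    #   IF distance is FAR  THEN speed = 1.0
    mu_near = tri(distance, -1.0, 0.0, 1.0)
    mu_far = tri(distance, 0.0, 1.0, 2.0)
    num = mu_near * 0.1 + mu_far * 1.0      # firing-strength-weighted consequents
    den = mu_near + mu_far
    return num / den if den > 0 else 0.0    # weighted-average defuzzification
```

Between the two membership peaks the output blends smoothly, which is what lets a fuzzy controller act sensibly under uncertain sensor readings.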


NeuroVE: Brain-inspired Linear-Angular Velocity Estimation with Spiking Neural Networks

Li, Xiao, Chen, Xieyuanli, Guo, Ruibin, Wu, Yujie, Zhou, Zongtan, Yu, Fangwen, Lu, Huimin

arXiv.org Artificial Intelligence

Vision-based ego-velocity estimation is a fundamental problem in robot state estimation. However, the constraints of frame-based cameras, including motion blur and insufficient frame rates in dynamic settings, readily lead to the failure of conventional velocity estimation techniques. Mammals exhibit a remarkable ability to accurately estimate their ego-velocity during aggressive movement. Hence, integrating this capability into robots shows great promise for addressing these challenges. In this paper, we propose a brain-inspired framework for linear-angular velocity estimation, dubbed NeuroVE. The NeuroVE framework employs an event camera to capture motion information and implements spiking neural networks (SNNs) to simulate the function of the brain's spatial cells for velocity estimation. We formulate velocity estimation as a time-series forecasting problem. To this end, we design an Astrocyte Leaky Integrate-and-Fire (ALIF) neuron model to encode continuous values. Additionally, we have developed an Astrocyte Spiking Long Short-term Memory (ASLSTM) structure, which significantly improves the time-series forecasting capabilities, enabling an accurate estimate of ego-velocity. Results from both simulation and real-world experiments indicate that NeuroVE achieves an approximately 60% increase in accuracy compared to other SNN-based approaches.
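The ALIF neuron above extends the classic leaky integrate-and-fire (LIF) model; the sketch below shows only the plain LIF dynamics for orientation, with made-up time constant and threshold, not the paper's astrocyte-augmented variant.

```python
def lif_step(v, i_in, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    # leaky integrate-and-fire: membrane potential decays toward rest (0),
    # integrates the input current, and emits a spike at threshold
    v = v + dt / tau * (-v + i_in)
    if v >= v_th:
        return v_reset, 1  # spike, then hard reset
    return v, 0
```

Driving this neuron with a constant supra-threshold current produces a regular spike train whose rate encodes the input magnitude, which is the basic mechanism rate-coded SNNs build on.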


On the Benefits of Visual Stabilization for Frame- and Event-based Perception

Rodriguez-Gomez, Juan Pablo, Dios, Jose Ramiro Martinez-de, Ollero, Anibal, Gallego, Guillermo

arXiv.org Artificial Intelligence

Vision-based perception systems are typically exposed to large orientation changes in different robot applications. In such conditions, their performance might be compromised due to the inherent complexity of processing data captured under challenging motion. Integration of mechanical stabilizers to compensate for the camera rotation is not always possible due to robot payload constraints. This paper presents a processing-based stabilization approach that compensates for the camera's rotational motion both on events and on frames (i.e., images). Assuming that the camera's attitude is available, we evaluate the benefits of stabilization in two perception applications: feature tracking and estimation of the translational component of the camera's ego-motion. The validation is performed using synthetic data and sequences from well-known event-based vision datasets. The experiments reveal that stabilization can improve feature tracking and camera ego-motion estimation accuracy by 27.37% and 34.82%, respectively. Concurrently, stabilization can reduce the processing time for computing the camera's linear velocity by at least 25%. Code is available at https://github.com/tub-rip/visual_stabilization
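When the camera's attitude is known, a pure rotation can be undone in image space with the homography H = K R^T K^{-1}, since rotation-induced pixel motion is depth-independent. The sketch below applies this per pixel; it is a generic illustration under an assumed pinhole model, not the repository's implementation.

```python
import numpy as np

def derotate_pixel(u, v, K, R):
    # map a pixel through the pure-rotation homography H = K R^T K^{-1},
    # removing the apparent motion caused by camera rotation R
    H = K @ R.T @ np.linalg.inv(K)
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Applied to every event coordinate (or to a whole frame via warping), this leaves only the translation-induced motion, which is why stabilization simplifies both feature tracking and linear-velocity estimation.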


Pose, Velocity and Landmark Position Estimation Using IMU and Bearing Measurements

Wang, Miaomiao, Tayebi, Abdelhamid

arXiv.org Artificial Intelligence

This paper investigates the estimation problem of the pose (orientation and position) and linear velocity of a rigid body, as well as the landmark positions, using an inertial measurement unit (IMU) and a monocular camera. First, we propose a globally exponentially stable (GES) linear time-varying (LTV) observer for the estimation of body-frame landmark positions and velocity, using IMU and monocular bearing measurements. Thereafter, using the gyro measurements, some landmarks known in the inertial frame, and the estimates from the LTV observer, we propose a nonlinear pose observer on $\mathrm{SO}(3)\times\mathbb{R}^3$. The overall estimation system is shown to be almost globally asymptotically stable (AGAS) using the notion of almost global input-to-state stability (ISS). Interestingly, we show that with the knowledge (in the inertial frame) of a small number of landmarks, we can recover (under some conditions) the unknown positions (in the inertial frame) of a large number of landmarks. Numerical simulation results are presented to illustrate the performance of the proposed estimation scheme.
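The bearing measurement that drives such observers is simply the unit direction to a landmark expressed in the body frame. A minimal sketch of that measurement model, under assumed conventions (rotation `R_wb` from body to world, positions in the world frame):

```python
import numpy as np

def bearing(p_landmark_w, p_body_w, R_wb):
    # unit bearing vector to the landmark, expressed in the body frame;
    # a monocular camera provides this direction but not the range
    d = R_wb.T @ (p_landmark_w - p_body_w)
    return d / np.linalg.norm(d)
```

Because each bearing constrains only a direction, multiple landmarks and the motion captured by the IMU are what make positions and velocity observable.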


Adaptive Backstepping and Non-singular Sliding Mode Control for Quadrotor UAVs with Unknown Time-varying Uncertainties

Shevidi, Arezo, Hashim, Hashim A.

arXiv.org Artificial Intelligence

This paper presents a novel quaternion-based nonsingular control system for underactuated vertical take-off and landing (VTOL) Unmanned Aerial Vehicles (UAVs). Position and attitude tracking is challenging with regard to singularity avoidance and accuracy. Quaternion-based Adaptive Backstepping Control (QABC) is developed to tackle the underactuation of UAV control systems in a cascaded way. Leveraging the virtual (auxiliary) control developed in the QABC, the desired attitude components and required thrust are produced. Afterwards, we propose Quaternion-based Sliding Mode Control (QASMC) to enhance stability and mitigate chattering issues. The sliding surface is modified to avoid the singularity of conventional SMC. To improve the robustness of the controllers, the control parameters are updated using adaptation laws. Furthermore, the asymptotic stability of the translational and rotational dynamics is guaranteed by utilizing Lyapunov stability theory and Barbalat's Lemma. Finally, comprehensive comparison results are provided to verify the effectiveness of the proposed controllers in the presence of unknown time-varying parameter uncertainties and significant initial errors. Keywords: Non-singular Sliding Mode Control, Adaptive Backstepping Control, Unit-quaternion, Drones, Unmanned Aerial Vehicles, Asymptotic Stability, Position and Orientation Control
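Quaternion-based attitude controllers of this kind are typically driven by the vector part of the error quaternion, which is singularity-free where Euler-angle errors are not. The sketch below shows only that error computation under a scalar-first Hamilton convention; it is generic background, not the paper's sliding surface.

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product, scalar-first convention [w, x, y, z]
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def attitude_error(q_d, q):
    # error quaternion q_e = q_d^* ⊗ q; its vector part vanishes
    # exactly when the attitude matches the desired one
    q_d_conj = np.array([q_d[0], -q_d[1], -q_d[2], -q_d[3]])
    return quat_mul(q_d_conj, q)[1:]
```

A sliding surface then combines this error vector with the angular-velocity error, and the adaptation laws tune the controller gains online.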